CLadder: Assessing Causal Reasoning in Language Models

Neural Information Processing Systems

The ability to perform causal reasoning is widely considered a core feature of intelligence. In this work, we investigate whether large language models (LLMs) can coherently reason about causality. Much of the existing work in natural language processing (NLP) focuses on evaluating commonsense causal reasoning in LLMs, and thus fails to assess whether a model can perform causal inference in accordance with a set of well-defined formal rules. To address this, we propose a new NLP task, causal inference in natural language, inspired by the "causal inference engine" postulated by Judea Pearl et al. We compose a large dataset, CLadder, with 10K samples: based on a collection of causal graphs and queries (associational, interventional, and counterfactual), we obtain symbolic questions and ground-truth answers through an oracle causal inference engine.
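To illustrate the kind of formal query such an oracle engine answers, here is a minimal sketch on a toy confounded graph Z → X, Z → Y, X → Y with binary variables. The conditional probability tables below are illustrative assumptions, not values from the CLadder dataset; the point is that an associational query P(Y=1 | X=1) and an interventional query P(Y=1 | do(X=1)) give different answers, which the engine computes by rule (here, backdoor adjustment over Z).

```python
# Toy "oracle" causal inference engine on the graph Z -> X, Z -> Y, X -> Y.
# All CPT values are illustrative assumptions for this sketch.

P_Z = {0: 0.5, 1: 0.5}                      # P(Z = z)
P_X1_given_Z = {0: 0.2, 1: 0.8}             # P(X = 1 | Z = z)
P_Y1_given_XZ = {(0, 0): 0.1, (0, 1): 0.4,  # P(Y = 1 | X = x, Z = z)
                 (1, 0): 0.5, (1, 1): 0.9}

def p_y1_given_x1():
    """Associational query P(Y=1 | X=1): plain conditioning on the joint."""
    num = sum(P_Z[z] * P_X1_given_Z[z] * P_Y1_given_XZ[(1, z)] for z in (0, 1))
    den = sum(P_Z[z] * P_X1_given_Z[z] for z in (0, 1))
    return num / den

def p_y1_do_x1():
    """Interventional query P(Y=1 | do(X=1)): backdoor adjustment over Z,
    i.e. sum_z P(Z=z) * P(Y=1 | X=1, Z=z)."""
    return sum(P_Z[z] * P_Y1_given_XZ[(1, z)] for z in (0, 1))

print(p_y1_given_x1())  # observational answer
print(p_y1_do_x1())     # interventional answer (differs due to confounding)
```

With these numbers the two queries disagree (0.82 observationally vs. 0.70 under intervention), showing why an LLM that only matches associations can fail the interventional and counterfactual rungs the paper tests.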


From statistical learning to acting and thinking in an imagined space

#artificialintelligence

"If we really want to build a machine on the verge of human-level intelligence, we need to ditch current statistical and data-driven learning paradigm in favour of a causal-based approach." In the 1970s and early 1980s, computer scientists believed that the manipulation of symbols provided a priori by humans was sufficient for computer systems to exhibit intelligence and solve seemingly hard problems. This hypothesis came to be known as the symbol-rule hypothesis. However, despite some initial encouraging progress, such as computer chess and theorem proving, it soon became apparent that rule-based systems could not solve problems that seem simple to humans. "It is comparatively easy to make computers exhibit adult level performance […] and difficult or impossible to give them the skills of a one-year-old".